Section: New Results
Penalty analysis for sparse solutions of underdetermined linear systems of equations
Participants : Emmanuel Soubies, Laure Blanc-Féraud, Gilles Aubert.
In many applications, such as compression to reduce data storage, compressed sensing to recover a signal from fewer measurements, source separation, image decomposition and many others, one aims to compute a sparse solution of an underdetermined linear system of equations. Finding such sparse solutions is currently an active research topic. This problem can be formulated as a least-squares problem regularized with the $\ell_0$-norm. We consider the penalized form

$$G_{\ell_0}(x) = \frac{1}{2}\|Ax - d\|_2^2 + \lambda \|x\|_0, \qquad (2)$$

where $A \in \mathbb{R}^{M \times N}$, $d \in \mathbb{R}^M$ represents the data and $\lambda > 0$ is a hyperparameter characterizing the trade-off between data fidelity and sparsity.
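As a concrete illustration of the penalized formulation, when $A$ happens to be orthonormal the problem decouples coordinate-wise and its global minimizer is given by hard thresholding of $A^\top d$ at $\sqrt{2\lambda}$. A minimal NumPy sketch with hypothetical data (the function names are ours, not from the paper):

```python
import numpy as np

def l0_objective(A, d, x, lam):
    """Least-squares data term plus lambda times the number of nonzeros."""
    return 0.5 * np.sum((A @ x - d) ** 2) + lam * np.count_nonzero(x)

def hard_threshold(z, lam):
    """Closed-form minimizer when A is orthonormal (A.T @ A = I):
    keep z_i when z_i**2 > 2*lam, set it to zero otherwise."""
    x = z.copy()
    x[x ** 2 <= 2 * lam] = 0.0
    return x

# Toy example with A = I (trivially orthonormal), made-up data.
d = np.array([3.0, 0.5, -2.0, 0.1])
A = np.eye(4)
lam = 1.0
x_star = hard_threshold(A.T @ d, lam)  # -> [3., 0., -2., 0.]
```

In the general (non-orthonormal) underdetermined case no such closed form exists, which is precisely why the combinatorial difficulty discussed below arises.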
It is well known that reaching a global minimizer of this functional is an NP-hard combinatorial problem. Besides the non-convexity of this 'norm', its discontinuity at zero makes the minimization of the overall functional a hard task. In this work we focus on non-convex continuous penalties widely used to approximate the $\ell_0$-norm, which usually lead to better results than the classical convex relaxation since they are more '$\ell_0$-like'. Based on some results in one dimension, we propose the Exact $\ell_0$ penalty (E$\ell_0$). In one dimension, and when the matrix $A$ is orthogonal, replacing the $\ell_0$-norm in (2) by this penalty gives the convex hull of the overall functional. We have then proved, for any matrix $A$, that the global minimizers of the E$\ell_0$-penalized objective function are the same as those of the $\ell_0$ functional $G_{\ell_0}$. We also demonstrate that all the local minimizers of this approximated functional are local minimizers of $G_{\ell_0}$, while numerical experiments show that the converse is in general false and that the objective function penalized with E$\ell_0$ admits fewer local minimizers than the $\ell_0$ functional. This work therefore provides, in some sense, an equivalence between the initial problem and its approximation: one can address problem (2) by replacing the $\ell_0$-norm with the E$\ell_0$ penalty, which gives the objective function better properties although the problem remains non-convex.

Recently, some authors have proposed algorithms and proved their convergence to critical points of non-smooth non-convex functionals such as the E$\ell_0$-penalized one. Based on such algorithms, we propose a macro algorithm and prove its convergence to a (local) minimizer of the initial $\ell_0$ functional.
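The section does not reproduce the expression of the E$\ell_0$ penalty itself. As an illustrative sketch only, assuming the standard continuous exact $\ell_0$ construction (each coordinate $i$, with column norm $\|a_i\|$, is penalized by a quadratic cap that vanishes at zero and saturates at $\lambda$ beyond $\sqrt{2\lambda}/\|a_i\|$; the exact formula should be checked against the associated paper), the continuity at zero and the agreement with $\lambda\|x\|_0$ for large entries can be verified numerically:

```python
import numpy as np

def phi_el0(t, a_norm, lam):
    """Sketch of a continuous exact-l0-type penalty for one coordinate
    whose column a_i has norm a_norm: a quadratic cap on
    [-sqrt(2*lam)/a_norm, sqrt(2*lam)/a_norm], constant lam outside."""
    thresh = np.sqrt(2.0 * lam) / a_norm
    quad = lam - (a_norm ** 2 / 2.0) * (np.abs(t) - thresh) ** 2
    return np.where(np.abs(t) <= thresh, quad, lam)

lam, a = 1.0, 1.0
print(float(phi_el0(0.0, a, lam)))           # 0.0: no penalty at zero, like the l0-norm
print(float(phi_el0(5.0, a, lam)))           # 1.0: saturates at lam, like lam * |t|_0
print(float(phi_el0(np.sqrt(2.0), a, lam)))  # 1.0: continuous across the threshold
```

Unlike the $\ell_0$-norm, this penalty is continuous everywhere, which is what makes the convergence guarantees of recent non-smooth non-convex algorithms applicable.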